
    Audiovisual Non-Verbal Dynamic Faces Elicit Converging fMRI and ERP Responses

    In an everyday social interaction we automatically integrate another's facial movements and vocalizations, be they linguistic or otherwise. This requires audiovisual integration of a continual barrage of sensory input, a phenomenon previously well studied with human audiovisual speech but not with non-verbal vocalizations. Using both fMRI and ERPs, we assessed neural activity while participants viewed and listened to an animated female face producing non-verbal human vocalizations (e.g., coughing, sneezing) under audio-only (AUD), visual-only (VIS) and audiovisual (AV) stimulus conditions, alternating with rest (R). Underadditive effects occurred in regions dominant for sensory processing, which showed AV activation no greater than that of the dominant modality alone. Right posterior temporal and parietal regions showed an AV maximum, in which AV activation was greater than either modality alone but not greater than the sum of the unisensory conditions. Other frontal and parietal regions showed common activation, in which AV activation was the same as in one or both unisensory conditions. ERP data showed an early superadditive effect (AV > AUD + VIS, no rest), mid-range underadditive effects for the auditory N140 and the face-sensitive N170, and late AV-maximum and common-activation effects. Based on the convergence between the fMRI and ERP data, we propose a mechanism whereby a multisensory stimulus may be signaled as early as 60 ms and facilitated in sensory-specific regions through increased processing speed (at the N170) and efficiency (decreased amplitude of auditory and face-sensitive cortical activation and ERPs). Finally, higher-order processes are also altered, but in a more complex fashion.
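    The additivity categories contrasted in this abstract (superadditive, underadditive, AV maximum, common activation) reduce to simple comparisons of condition-wise response estimates. The sketch below is a minimal illustration of those comparisons, assuming rest-corrected response values per region; the function name, the equality tolerance, and the example numbers are assumptions, not the authors' analysis pipeline.

```python
# Illustrative classification of a region's multisensory response profile,
# assuming rest-corrected response estimates for the AUD, VIS and AV conditions.
# The `tol` equality margin is a hypothetical placeholder, not from the paper.

def classify_av_effect(aud: float, vis: float, av: float, tol: float = 0.05) -> str:
    dominant = max(aud, vis)
    if av > aud + vis:                      # AV exceeds the sum of unisensory responses
        return "superadditive"
    if av > dominant + tol:                 # AV exceeds either modality, but not their sum
        return "AV maximum"
    if abs(av - aud) <= tol or abs(av - vis) <= tol:
        return "common activation"          # AV matches one (or both) unisensory responses
    return "underadditive"                  # AV no greater than the dominant modality

# Example: an auditory-dominant region where AV adds nothing beyond audition
print(classify_av_effect(aud=1.2, vis=0.3, av=1.0))  # -> "underadditive"
```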

    Perception of Loudness Is Influenced by Emotion

    Loudness perception is thought to be a modular system that is unaffected by other brain systems. We tested the hypothesis that loudness perception can be influenced by negative affect using a conditioning paradigm in which some auditory stimuli were paired with aversive experiences while others were not. We found that the same auditory stimulus was reported as louder, more negative and more fear-inducing when it had been conditioned with an aversive experience than when it served as a control stimulus. This result supports an important role of emotion in auditory perception.

    Top-down and bottom-up modulation in processing bimodal face/voice stimuli

    Background: Processing of multimodal information is a critical capacity of the human brain, and classic studies have shown that bimodal stimulation can either facilitate or interfere with perceptual processing. Comparing activity to congruent and incongruent bimodal stimuli can reveal sensory dominance in particular cognitive tasks. Results: We investigated audiovisual interactions driven by stimulus properties (bottom-up influences) or by task (top-down influences) on congruent and incongruent simultaneously presented faces and voices while ERPs were recorded. Subjects performed gender categorisation, directing attention either to faces or to voices, and also judged whether the face/voice stimuli were congruent in terms of gender. Behaviourally, the unattended modality affected processing in the attended modality: the disruption was greater for attended voices. ERPs revealed top-down modulations of early brain processing (30-100 ms) over unisensory cortices. No effects were found on the N170 or VPP, but from 180-230 ms larger right frontal activity was seen for incongruent than for congruent stimuli. Conclusions: Our data demonstrate that in a gender categorisation task the processing of faces dominates over the processing of voices. Brain activity was modulated differently by top-down and bottom-up information: top-down influences modulated early brain activity, whereas bottom-up interactions occurred relatively late.
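    As a rough illustration of the kind of time-window comparison reported here (effects in 30-100 ms and 180-230 ms windows), the snippet below computes a mean-amplitude congruency effect from epoched ERP data. The array names, sampling rate, epoch bounds and placeholder data are assumptions for the example, not the study's actual analysis.

```python
import numpy as np

# Hypothetical epoched ERP data: trials x time samples, in microvolts,
# epoch spanning -100..500 ms at an assumed 500 Hz sampling rate.
sfreq, epoch_start_ms = 500, -100

def mean_amplitude(epochs: np.ndarray, win_ms: tuple[int, int]) -> float:
    """Average amplitude across trials within a latency window given in ms."""
    to_sample = lambda t: int((t - epoch_start_ms) * sfreq / 1000)
    i0, i1 = to_sample(win_ms[0]), to_sample(win_ms[1])
    return float(epochs[:, i0:i1].mean())

rng = np.random.default_rng(0)
congruent = rng.normal(0.0, 1.0, size=(80, 300))     # placeholder data, 80 trials
incongruent = rng.normal(0.3, 1.0, size=(80, 300))

# Congruency effect (incongruent minus congruent) in the two reported windows
for window in [(30, 100), (180, 230)]:
    effect = mean_amplitude(incongruent, window) - mean_amplitude(congruent, window)
    print(f"{window} ms: congruency effect = {effect:.2f} µV")
```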

    Multisensory Integration and Attention in Autism Spectrum Disorder: Evidence from Event-Related Potentials

    Successful integration of simultaneously perceived sensory signals is crucial for social behavior. Recent findings indicate that this multisensory integration (MSI) can be modulated by attention. Theories of Autism Spectrum Disorders (ASDs) suggest that MSI is affected in this population, while it remains unclear to what extent this is related to impairments in attentional capacity. In the present study, event-related potentials (ERPs) following emotionally congruent and incongruent face-voice pairs were measured in 23 high-functioning adults with ASD and 24 age- and IQ-matched controls. MSI was studied while the participants' attention was manipulated. ERPs were measured at typical auditory and visual processing peaks, namely the P2 and N170. While controls showed MSI during both divided-attention and easy selective-attention tasks, individuals with ASD showed MSI during easy selective-attention tasks only. We conclude that individuals with ASD are able to process multisensory emotional stimuli, but that this processing is modulated differently by attention mechanisms in these participants, especially those associated with divided attention. This atypical interaction between attention and MSI is also relevant to treatment strategies, with training of multisensory attentional control possibly being more beneficial than conventional sensory integration therapy.

    Restricted Attentional Capacity within but Not between Sensory Modalities: An Individual Differences Approach

    Background: Most people show a remarkable deficit in reporting the second of two targets presented in close temporal succession, reflecting an attentional blink (AB). An often-ignored aspect of the AB is that there are large individual differences in the magnitude of the effect. Here we exploit these individual differences to address a long-standing question: does attention to a visual target come at a cost for attention to an auditory target (and vice versa)? More specifically, the goal of the current study was to investigate (a) whether individuals with a large within-modality AB also show a large cross-modal AB, and (b) whether individual differences in AB magnitude within different modalities correlate or are completely separate. Methodology/Principal Findings: While minimizing differential task difficulty and the chance of a task switch, a significant AB was observed when both targets were presented within the auditory or within the visual modality, and a positive correlation was found between individual within-modality AB magnitudes. However, neither a cross-modal AB nor a correlation between cross-modal and within-modality AB magnitudes was found. Conclusion/Significance: The results provide strong evidence that a major source of attentional restriction lies in modality-specific sensory systems rather than in a central amodal system, effectively settling a long-standing debate. Individuals with a large within-modality AB may be especially committed or focused in their processing of the first target, and to some extent that tendency to focus could cross modalities, reflected in the within-modality correlation. However, whatever they are focusing (resource allocation, blocking of processing) is strictly within-modality, as it affects the second target only on within-modality trials. The findings show that individual differences in AB magnitude can provide important information about the modular structure of human cognition.
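    The individual-differences logic here amounts to computing an AB magnitude per participant (the drop in T2-given-T1 accuracy at short relative to long lags) and correlating those magnitudes across conditions. The sketch below shows one such computation under assumed accuracy values and a simple short-versus-long-lag scoring rule; it is not the study's actual scoring procedure.

```python
import numpy as np

def ab_magnitude(short_lag_t2_acc: float, long_lag_t2_acc: float) -> float:
    """Attentional blink magnitude: drop in T2|T1 accuracy at short vs. long lags."""
    return long_lag_t2_acc - short_lag_t2_acc

# Hypothetical per-participant accuracies (T2 report given T1 correct), one pair per subject
visual_ab   = np.array([ab_magnitude(s, l) for s, l in [(0.55, 0.90), (0.70, 0.92), (0.60, 0.88)]])
auditory_ab = np.array([ab_magnitude(s, l) for s, l in [(0.48, 0.86), (0.66, 0.90), (0.60, 0.85)]])

# Within-modality correlation across individuals (the positive relation reported above)
r = np.corrcoef(visual_ab, auditory_ab)[0, 1]
print(f"visual vs. auditory AB magnitude correlation: r = {r:.2f}")
```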

    Sensory information in perceptual-motor sequence learning: visual and/or tactile stimuli

    Sequence learning in serial reaction time (SRT) tasks has been investigated mostly with unimodal stimulus presentation. This approach disregards the possibility that sequence acquisition may be guided by multiple sources of sensory information simultaneously. In the current study we trained participants in an SRT task with visual-only, tactile-only, or bimodal (visual and tactile) stimulus presentation. Sequence performance of the bimodal and visual-only training groups was similar, and both performed better than the tactile-only training group. In a subsequent transfer phase, participants from all three training groups were tested under visual, tactile, and bimodal stimulus presentation. Sequence performance of the visual-only and bimodal training groups was again highly similar across these identical stimulus conditions, indicating that the addition of tactile stimuli did not benefit the bimodal training group. Additionally, comparing identical stimulus conditions in the transfer phase showed that the poorer sequence performance of the tactile-only group during training probably did not reflect a difference in sequence learning but rather a difference in the expression of sequence knowledge.
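    Sequence knowledge in SRT tasks is commonly quantified as the reaction-time cost incurred when the trained sequence is replaced by random material. The snippet below sketches that kind of score for the three training groups; the group labels, RT values and scoring rule are invented for illustration and are not the study's reported data or analysis.

```python
# Illustrative sequence-learning score: mean RT on random blocks minus mean RT on
# sequenced blocks (larger = more sequence knowledge expressed). Values are invented.
mean_rt_ms = {
    "visual":  {"sequenced": 410, "random": 470},
    "tactile": {"sequenced": 455, "random": 490},
    "bimodal": {"sequenced": 405, "random": 468},
}

for group, rts in mean_rt_ms.items():
    learning_score = rts["random"] - rts["sequenced"]
    print(f"{group:8s} training: sequence-learning score = {learning_score} ms")
```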

    Contribution by Polymorphonucleate Granulocytes to Elevated Gamma-Glutamyltransferase in Cystic Fibrosis Sputum

    Background: Cystic fibrosis (CF) is an autosomal recessive disorder characterized by chronic neutrophilic airway inflammation, increased oxidative stress and reduced levels of antioxidants such as glutathione (GSH). Gamma-glutamyltransferase (GGT), an enzyme induced by oxidative stress and involved in the catabolism of GSH and its derivatives, is increased in the airways of CF patients with inflammation, but the possible implications of its increase have not yet been investigated in detail. Principal Findings: The present study aimed to evaluate the origin and the biochemical characteristics of the GGT detectable in CF sputum. We found GGT activity both in neutrophils and in the fluid, the latter correlating significantly with myeloperoxidase expression. In neutrophils, GGT was associated with intracellular granules. In the fluid, gel-filtration chromatography showed the presence of two distinct GGT fractions, the first corresponding to the human plasma b-GGT fraction, the other to the free enzyme. The same fractions were also observed in the supernatant of ionomycin- and fMLP-activated neutrophils. Western blot analysis confirmed the presence of a single band of GGT-immunoreactive peptide in the CF sputum samples and in isolated neutrophils. Conclusions: Our data indicate that neutrophils are able to transport and release GGT, thus increasing GGT activity in CF sputum. The prompt release of GGT may have consequences for all GGT substrates, including major inflammatory mediators such as S-nitrosoglutathione and leukotrienes, and could participate in early modulation of the inflammatory response.

    Neural correlates of audiovisual motion capture

    Visual motion can affect the perceived direction of auditory motion (i.e., audiovisual motion capture). It is debated, though, whether this effect occurs at perceptual or decisional stages. Here, we examined the neural consequences of audiovisual motion capture using the mismatch negativity (MMN), an event-related brain potential reflecting pre-attentive auditory deviance detection. In an auditory-only condition, occasional changes in the direction of a moving sound (deviants) elicited an MMN starting around 150 ms. In an audiovisual condition, auditory standards and deviants were synchronized with a visual stimulus that moved in the same direction as the auditory standards. These audiovisual deviants did not evoke an MMN, indicating that visual motion reduced the perceptual difference between the sound motion of standards and deviants. The inhibition of the MMN by visual motion provides evidence that auditory and visual motion signals are integrated at early sensory processing stages.
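    The MMN referred to here is conventionally visualised as a deviant-minus-standard difference wave and quantified as the mean amplitude of that difference in a post-onset window (starting around 150 ms in this study). The sketch below shows that generic computation under assumed sampling rate, epoch bounds and placeholder waveforms; it is not the authors' pipeline.

```python
import numpy as np

# Hypothetical averaged ERPs, sampled at an assumed 500 Hz over a 0..400 ms epoch.
sfreq = 500
t_ms = np.arange(0, 400, 1000 / sfreq)
standard_erp = np.zeros_like(t_ms)                                   # placeholder waveform
deviant_erp = -1.5 * np.exp(-((t_ms - 180) ** 2) / (2 * 30 ** 2))    # negativity near 180 ms

# Deviant-minus-standard difference wave; the MMN appears as a negativity after ~150 ms
difference_wave = deviant_erp - standard_erp
window = (t_ms >= 150) & (t_ms <= 250)
mmn_amplitude = difference_wave[window].mean()
print(f"mean difference 150-250 ms: {mmn_amplitude:.2f} µV (negative = MMN present)")
```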

    Efficient Visual Search from Synchronized Auditory Signals Requires Transient Audiovisual Events

    BACKGROUND: A prevailing view is that audiovisual integration requires temporally coincident signals. However, a recent study failed to find any evidence for audiovisual integration in visual search even when using synchronized audiovisual events. An important question is therefore what information is critical for observing audiovisual integration. METHODOLOGY/PRINCIPAL FINDINGS: Here we demonstrate that temporal coincidence (i.e., synchrony) of auditory and visual components can trigger audiovisual interaction in cluttered displays and consequently produce very fast and efficient target identification. In visual search experiments, subjects found a modulating visual target vastly more efficiently when it was paired with a synchronous auditory signal. By manipulating the kind of temporal modulation (sine wave vs. square wave vs. difference wave; harmonic sine-wave synthesis; gradient of onset/offset ramps), we show that abrupt visual events are required for this search efficiency and that sinusoidal audiovisual modulations do not support efficient search. CONCLUSIONS/SIGNIFICANCE: Thus, audiovisual temporal alignment will lead to benefits in visual search only if the changes in the component signals are both synchronized and transient. We propose that transient signals are necessary in synchrony-driven binding to avoid spurious interactions with unrelated signals that occur close together in time.
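    The key manipulation contrasts gradual (sinusoidal) with abrupt (square-wave) temporal modulation of the visual target, with the auditory signal following the same profile so that the two stay synchronized. The sketch below generates such modulation envelopes and shows why only the square wave provides sharp transients; the modulation rate, duration and sampling rate are assumed values, not the experiment's parameters.

```python
import numpy as np

# Assumed parameters: 1.2 Hz modulation over 3 s, sampled at 60 Hz (e.g., a display frame rate)
rate_hz, duration_s, fs = 1.2, 3.0, 60
t = np.arange(0, duration_s, 1 / fs)

# Gradual modulation: sinusoidal luminance envelope (no abrupt transients)
sine_envelope = 0.5 + 0.5 * np.sin(2 * np.pi * rate_hz * t)

# Abrupt modulation: square-wave envelope with sharp on/off transients
square_envelope = (np.sin(2 * np.pi * rate_hz * t) > 0).astype(float)

# A synchronized auditory amplitude envelope would follow the same profile,
# so only the square-wave version yields sharp, transient audiovisual events.
print("max frame-to-frame change, sine:  ", np.abs(np.diff(sine_envelope)).max().round(3))
print("max frame-to-frame change, square:", np.abs(np.diff(square_envelope)).max().round(3))
```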